18 research outputs found

    Scalable Detection and Isolation of Phishing

    Get PDF
    This paper presents a proposal for the scalable detection and isolation of phishing. The main ideas are to move protection from end users towards the network provider and to employ the novel bad neighborhood concept in order to detect and isolate both phishing e-mail senders and phishing web servers. In addition, we propose to develop a self-management architecture that enables ISPs to protect their users against phishing attacks, and we explain how this architecture could be evaluated. This proposal is the result of half a year of research work at the University of Twente (UT), and it is aimed at a Ph.D. thesis in 2012.

    SNMP Trace Analysis: Results of Extra Traces

    Get PDF
    The Simple Network Management Protocol (SNMP) was introduced in the late 1980s. Since then, several evolutionary protocol changes have taken place, resulting in the SNMP version 3 framework (SNMPv3). Extensive use of SNMP has led to significant practical experience among both network operators and researchers. Recently, researchers have gained access to real-world SNMP traces, which allows them to analyze how SNMP is used in practice. A 2007 publication made a significant start on this work. However, real-world trace analysis demands a continual approach, due to changing circumstances (e.g., regarding the network and SNMP engine implementations). Therefore, this paper reports on considerably more, and more recent, traces than the aforementioned publication.
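
    As a minimal illustration of this kind of trace analysis, the sketch below tallies SNMP protocol versions in a packet capture. It assumes Scapy is installed (Scapy dissects UDP traffic on ports 161/162 as SNMP) and uses trace.pcap as a placeholder file name; it is not the paper's actual tooling.

        # Tally SNMP protocol versions in a packet trace (hypothetical trace.pcap).
        # Version codes: 0 = SNMPv1, 1 = SNMPv2c, 3 = SNMPv3.
        from collections import Counter

        from scapy.all import PcapReader
        from scapy.layers.snmp import SNMP

        versions = Counter()
        with PcapReader("trace.pcap") as reader:
            for pkt in reader:
                if pkt.haslayer(SNMP):
                    versions[int(pkt[SNMP].version.val)] += 1

        names = {0: "SNMPv1", 1: "SNMPv2c", 3: "SNMPv3"}
        for code, count in sorted(versions.items()):
            print(f"{names.get(code, code)}: {count}")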

    Pervasive gaps in Amazonian ecological research

    Get PDF
    Biodiversity loss is one of the main challenges of our time, and attempts to address it require a clear understanding of how ecological communities respond to environmental change across time and space. While the increasing availability of global databases on ecological communities has advanced our knowledge of biodiversity sensitivity to environmental changes, vast areas of the tropics remain understudied. In the American tropics, Amazonia stands out as the world's most diverse rainforest and the primary source of Neotropical biodiversity, but it remains among the least known forests in America and is often underrepresented in biodiversity databases. To make matters worse, human-induced modifications may eliminate pieces of the Amazon's biodiversity puzzle before we can use them to understand how ecological communities are responding. To increase the generalization and applicability of biodiversity knowledge, it is thus crucial to reduce biases in ecological research, particularly in regions projected to face the most pronounced environmental changes. We integrate ecological community metadata of 7,694 sampling sites for multiple organism groups in a machine learning model framework to map research probability across the Brazilian Amazonia, while identifying the region's vulnerability to environmental change. We show that 15%–18% of the most neglected areas in ecological research are expected to experience severe climate or land-use changes by 2050. This means that unless we take immediate action, we will not be able to establish their current status, much less monitor how it is changing and what is being lost.
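
    A simplified sketch of the mapping step described above, assuming a hypothetical amazonia_grid.csv of grid cells with environmental covariates and a binary sampled flag (all file and column names are ours, not the paper's); a random forest then scores the research probability of every cell.

        # Sketch: predict research probability per grid cell from covariates.
        # Column names (precipitation, elevation, dist_to_city, sampled) are
        # illustrative, not the paper's actual variables.
        import pandas as pd
        from sklearn.ensemble import RandomForestClassifier

        cells = pd.read_csv("amazonia_grid.csv")           # hypothetical input
        features = ["precipitation", "elevation", "dist_to_city"]

        model = RandomForestClassifier(n_estimators=500, random_state=0)
        model.fit(cells[features], cells["sampled"])       # 1 = has a sampling site

        # Probability that a cell would attract ecological research.
        cells["research_prob"] = model.predict_proba(cells[features])[:, 1]
        print(cells.nsmallest(10, "research_prob"))        # most neglected cells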

    Internet Bad Neighborhoods Temporal Behavior

    Get PDF
    Malicious hosts tend to be concentrated in certain areas of the IP addressing space, forming the so-called Bad Neighborhoods. Knowledge about this concentration is valuable in predicting attacks from unseen IP addresses. This observation has been employed in previous works to filter out spam. In this paper, we focus on the temporal behavior of bad neighborhoods. The goal is to determine if bad neighborhoods strike multiple times over a certain period of time, and if so, when the attacks occur. Among other findings, we show that even though bad neighborhoods do not exhibit a favorite combination of days to carry out attacks, 85% of the recurrent bad neighborhoods do carry out a second attack within 5 days of the first attack. These and the other findings presented here lead to several considerations on how attack prediction models can be made more effective, i.e., by generating neighborhood blacklists that are both predictive and short.
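
    A minimal sketch of the recurrence measurement, assuming an attack log of (timestamp, source IP) pairs and approximating neighborhoods as /24 prefixes; the toy records below are ours, and the paper's actual neighborhood definition may differ.

        # Sketch: days between first and second attack per /24 neighborhood.
        from collections import defaultdict
        from datetime import datetime
        from ipaddress import ip_network

        attacks = [
            ("2011-03-01", "203.0.113.7"),
            ("2011-03-04", "203.0.113.99"),   # same /24, 3 days later
            ("2011-03-20", "198.51.100.1"),
        ]

        seen = defaultdict(list)
        for ts, ip in attacks:
            prefix = ip_network(f"{ip}/24", strict=False)
            seen[prefix].append(datetime.fromisoformat(ts))

        for prefix, times in seen.items():
            times.sort()
            if len(times) > 1:
                gap = (times[1] - times[0]).days
                print(f"{prefix}: second attack after {gap} days")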

    Taking on Internet Bad Neighborhoods

    No full text
    It is a known fact that malicious IP addresses are not evenly distributed over the IP addressing space. In this paper, we frame networks that concentrate malicious addresses as bad neighborhoods. We propose a formal definition, show that this concentration can be used to predict future attacks (new spamming sources, in our case), and propose an algorithm to aggregate individual IP addresses into bigger neighborhoods. Moreover, we show how bad neighborhoods are specific to the exploited application (e.g., spam, SSH) and how the performance of different blacklist sources impacts lightweight spam filtering algorithms.
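
    A minimal sketch of the aggregation idea, under our own assumption (not necessarily the paper's exact algorithm) that a /24 becomes a bad neighborhood once the fraction of its addresses observed as malicious crosses a threshold; the input file name and the 5% threshold are illustrative.

        # Sketch: aggregate malicious /32 addresses into /24 bad neighborhoods.
        # malicious_ips.txt (one IP per line) is a hypothetical input.
        from collections import Counter
        from ipaddress import ip_network

        with open("malicious_ips.txt") as fh:
            counts = Counter(
                ip_network(f"{line.strip()}/24", strict=False)
                for line in fh if line.strip()
            )

        THRESHOLD = 0.05  # fraction of the 256 addresses in a /24
        bad = [prefix for prefix, n in counts.items() if n / 256 >= THRESHOLD]
        for prefix in bad:
            print(f"bad neighborhood: {prefix} ({counts[prefix]} malicious hosts)")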

    Evaluating the Impact of AbuseHUB on Botnet Mitigation

    No full text
    This document presents the final report of a two-year project to evaluate the impact of AbuseHUB, a Dutch clearinghouse for acquiring and processing abuse data on infected machines. The report was commissioned by the Netherlands Ministry of Economic Affairs, a co-funder of the development of AbuseHUB. AbuseHUB is the initiative of nine Internet Service Providers (ISPs), SIDN (the registry for the .nl top-level domain) and SURFnet (the national research and education network operator). The key objective of AbuseHUB is to improve the mitigation of botnets by its members. We set out to assess whether this objective is being reached by analyzing malware infection levels in the networks of AbuseHUB members and comparing them to those of other ISPs. Since AbuseHUB members together comprise over 90 percent of the broadband market in the Netherlands, it also makes sense to compare how the country as a whole has performed relative to other countries. This report complements the baseline measurement report produced in December 2013 and the interim report from March 2015. We use the same data sources as in the interim report, an expanded set compared to the earlier baseline report and to our 2011 study of botnet mitigation in the Netherlands.
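
    A toy illustration of the comparison at the heart of the report, assuming a hypothetical infections.csv with columns isp, member, infections, and subscribers (none of which are the report's actual data format); infection counts are normalized per subscriber before member and non-member ISPs are compared.

        # Sketch: compare infection rates of AbuseHUB members vs. other ISPs.
        import pandas as pd

        df = pd.read_csv("infections.csv")                 # hypothetical input
        df["rate"] = df["infections"] / df["subscribers"]  # infections per subscriber

        # member == True for AbuseHUB ISPs; compare typical infection rates.
        print(df.groupby("member")["rate"].median())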

    No domain left behind: Is Let's Encrypt democratizing encryption?

    No full text
    The 2013 National Security Agency revelations of pervasive monitoring have led to an "encryption rush" across the computer and Internet industry. To push back against massive surveillance and protect users' privacy, vendors, hosting and cloud providers have widely deployed encryption on their hardware, communication links, and applications. As a consequence, most web connections nowadays are encrypted. However, a significant part of Internet traffic is still not encrypted. It has been argued that both the costs and the complexity associated with obtaining and deploying X.509 certificates are major barriers to widespread encryption, since these certificates are required to establish encrypted connections. To address these issues, the Electronic Frontier Foundation, the Mozilla Foundation, the University of Michigan and a number of partners have set up Let's Encrypt (LE), a certificate authority that provides both free X.509 certificates and software that automates the deployment of these certificates. In this paper, we investigate whether LE has been successful in democratizing encryption: we analyze certificate issuance in the first year of LE and show, from various perspectives, that LE adoption is trending upward and that LE is indeed succeeding in covering the lower-cost end of the hosting market.
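
    A minimal sketch of the issuance-trend analysis, assuming a hypothetical certs.csv of certificate metadata (columns issuer and not_before) such as could be derived from Certificate Transparency logs; it simply counts Let's Encrypt certificates per month.

        # Sketch: monthly Let's Encrypt issuance from certificate metadata.
        # certs.csv and its columns are assumptions, not the paper's dataset.
        import pandas as pd

        certs = pd.read_csv("certs.csv", parse_dates=["not_before"])
        le = certs[certs["issuer"].str.contains("Let's Encrypt")]

        monthly = le.groupby(le["not_before"].dt.to_period("M")).size()
        print(monthly)  # an upward trend would support the democratization claim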

    How Dynamic is the ISPs Address Space? Towards Internet-Wide DHCP Churn Estimation

    No full text
    IP address counts are typically used as a surrogate metric for the number of hosts in a network, as in the case of ISP rankings based on botnet-infected addresses. However, due to the effects of dynamic IP address allocation, such counts tend to overestimate the number of hosts, sometimes by an order of magnitude. In the literature, the rate at which hosts change IP addresses is referred to as DHCP churn. Churn rates vary significantly within and among ISP networks, and such variation poses a challenge to any research that relies on IP addresses as a metric. We present the first attempt at estimating ISP and Internet-wide DHCP churn rates, in order to better understand the relation between IP addresses and hosts, as well as to allow us to correct data that relies on IP addresses as a surrogate metric. We propose a scalable active measurement methodology and validate it using ground-truth data from a medium-sized ISP. Next, we build a statistical model to estimate DHCP churn rates and validate it against the ground-truth data of the same ISP, correctly estimating 72.3% of DHCP churn rates. Finally, we apply our measurement methodology to four major ISPs, triangulate the results against another Internet census, and discuss the next steps toward more precise DHCP churn estimates.
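
    A simplified sketch of one ingredient of such an estimate, under our own simplifying assumption that churn can be proxied by how often addresses flip between responsive and unresponsive across periodic probe rounds; the paper's actual measurement methodology and statistical model are richer than this.

        # Sketch: crude DHCP churn proxy from periodic probe rounds.
        # rounds[t][i] == True means address i answered a probe in round t;
        # the probing scheme and churn definition here are simplified assumptions.
        rounds = [
            [True,  True,  False, True ],   # round 0
            [True,  False, True,  True ],   # round 1
            [False, False, True,  True ],   # round 2
        ]

        flips = 0
        for prev, cur in zip(rounds, rounds[1:]):
            flips += sum(p != c for p, c in zip(prev, cur))

        transitions = (len(rounds) - 1) * len(rounds[0])
        print(f"churn proxy: {flips / transitions:.2f} state changes per address-round")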